This technical report briefly describes our JDExplore d-team's Vega v2 submission on the SuperGLUE leaderboard. SuperGLUE is more challenging than the widely used general language understanding evaluation (GLUE) benchmark, containing eight difficult language understanding tasks, including question answering, natural language inference, word sense disambiguation, coreference resolution, and reasoning. [Method] Instead of arbitrarily increasing the size of a pretrained language model (PLM), our aim is to 1) fully extract knowledge from the input pretraining data given a certain parameter budget, e.g., 6B, and 2) effectively transfer this knowledge to downstream tasks. To achieve goal 1), we propose self-evolution learning for PLMs to wisely predict the informative tokens that should be masked, and supervise the masked language modeling (MLM) process with rectified smooth labels. For goal 2), we leverage the prompt transfer technique to improve the low-resource tasks by transferring the knowledge from the foundation model and related downstream tasks to the target task. [Results] According to our submission record (Oct. 2022), with our optimized pretraining and fine-tuning strategies, our 6B Vega method achieved new state-of-the-art performance on 4/8 tasks, sitting atop the SuperGLUE leaderboard on Oct. 8, 2022, with an average score of 91.3.
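To make the second ingredient concrete, the minimal sketch below shows one way a rectified smooth label for MLM could be formed: the one-hot target is blended with the model's own prediction at the masked position, but only when the model already ranks the gold token highly. The mixing weight, the top-k rectification rule, and the function names are illustrative assumptions, not the exact Vega v2 recipe.

```python
import numpy as np

def rectified_smooth_label(model_probs, gold_id, alpha=0.1, top_k=5):
    """Blend the one-hot MLM target with the model's own prediction.

    model_probs : (vocab_size,) softmax output of the PLM at a masked position
    gold_id     : index of the ground-truth token
    alpha       : smoothing weight (assumed value, for illustration only)
    top_k       : rectification rule -- only smooth when the gold token is
                  already among the model's top-k predictions (an assumption)
    """
    vocab_size = model_probs.shape[0]
    one_hot = np.zeros(vocab_size)
    one_hot[gold_id] = 1.0

    # Rectification: trust the model's distribution only if it is not
    # badly wrong about this position.
    if gold_id not in np.argsort(model_probs)[-top_k:]:
        return one_hot                       # fall back to the hard label
    return (1.0 - alpha) * one_hot + alpha * model_probs

def mlm_loss(log_probs, target_dist):
    """Cross-entropy between the soft target and the model's log-probabilities."""
    return float(-np.sum(target_dist * log_probs))
```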
Noisy Student Training (NST) has recently demonstrated extremely strong performance in Automatic Speech Recognition (ASR). In this paper, we propose a data selection strategy named LM Filter to improve the performance of NST on non-target-domain data in ASR tasks. Hypotheses are generated with and without a language model, and the CER difference between them is used as a filtering threshold. Results reveal significant improvements of 10.4% compared with the no-data-filtering baseline. We achieve a 3.31% CER on the AISHELL-1 test set, which to our knowledge is the best result obtained without any additional supervised data. We also evaluate on the supervised 1000-hour AISHELL-2 dataset, where a competitive CER of 4.72% is achieved.
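A minimal sketch of how such a filter could work, assuming decoding routines that produce a hypothesis with and without language-model fusion and a standard edit-distance CER; the threshold value and the interfaces are illustrative, not the paper's exact implementation.

```python
def cer(hyp, ref):
    """Character error rate via Levenshtein distance (standard definition)."""
    d = [[0] * (len(ref) + 1) for _ in range(len(hyp) + 1)]
    for i in range(len(hyp) + 1):
        d[i][0] = i
    for j in range(len(ref) + 1):
        d[0][j] = j
    for i in range(1, len(hyp) + 1):
        for j in range(1, len(ref) + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[len(hyp)][len(ref)] / max(len(ref), 1)

def lm_filter(utterances, decode_with_lm, decode_without_lm, threshold=0.1):
    """Keep unlabeled utterances whose with-LM and without-LM hypotheses agree.

    A large CER between the two hypotheses suggests the pseudo-label is
    unreliable; `threshold` and the decoding callables are assumed interfaces.
    """
    kept = []
    for utt in utterances:
        hyp_lm = decode_with_lm(utt)
        hyp_nolm = decode_without_lm(utt)
        if cer(hyp_nolm, hyp_lm) <= threshold:
            kept.append((utt, hyp_lm))       # pseudo-label with the LM hypothesis
    return kept
```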
Image super-resolution is a common task on mobile and IoT devices, where one often needs to upscale and enhance low-resolution images and video frames. While numerous solutions have been proposed for this problem in the past, they are usually not compatible with low-power mobile NPUs, which have many computational and memory constraints. In this Mobile AI challenge, we address this problem and task the participants with designing an efficient quantized image super-resolution solution that can demonstrate real-time performance on mobile NPUs. The participants were provided with the DIV2K dataset and trained INT8 models to perform high-quality 3X image upscaling. The runtime of all models was evaluated on the Synaptics VS680 Smart Home board with a dedicated edge NPU capable of accelerating quantized neural networks. All proposed solutions are fully compatible with the above NPU, demonstrating up to a 60 FPS rate when reconstructing Full HD resolution images. A detailed description of all models developed in the challenge is provided in this paper.
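For context, efficient mobile super-resolution networks typically end with a depth-to-space (pixel-shuffle) layer so that a 3X upscale is produced from a low-resolution feature map without transposed convolutions; the NumPy sketch below shows only that final rearrangement step and does not correspond to any specific challenge entry.

```python
import numpy as np

def depth_to_space(x, scale=3):
    """Rearrange a (H, W, C*scale^2) feature map into a (H*scale, W*scale, C) image.

    This is the standard pixel-shuffle upsampling used by lightweight
    super-resolution heads; scale=3 matches the challenge's 3X factor.
    """
    h, w, c = x.shape
    assert c % (scale * scale) == 0
    c_out = c // (scale * scale)
    x = x.reshape(h, w, scale, scale, c_out)
    x = x.transpose(0, 2, 1, 3, 4)           # interleave rows and columns
    return x.reshape(h * scale, w * scale, c_out)

# Example: a 64x64 feature map with 27 channels becomes a 192x192 RGB image.
features = np.random.rand(64, 64, 27).astype(np.float32)
print(depth_to_space(features).shape)        # (192, 192, 3)
```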
The explosive growth of dynamic and heterogeneous data traffic brings great challenges for 5G and beyond mobile networks. To enhance the network capacity and reliability, we propose a learning-based dynamic time-frequency division duplexing (D-TFDD) scheme that adaptively allocates the uplink and downlink time-frequency resources of base stations (BSs) to meet asymmetric and heterogeneous traffic demands while alleviating inter-cell interference. We formulate the problem as a decentralized partially observable Markov decision process (Dec-POMDP) that maximizes the long-term expected sum rate under the users' packet dropping ratio constraints. To jointly optimize the global resources in a decentralized manner, we propose a federated reinforcement learning (RL) algorithm named federated Wolpertinger deep deterministic policy gradient (FWDDPG). The BSs decide their local time-frequency configurations through RL algorithms and achieve global training by exchanging local RL models with their neighbors under a decentralized federated learning framework. Specifically, to deal with the large-scale discrete action space of each BS, we adopt a DDPG-based algorithm to generate actions in a continuous space, and then utilize the Wolpertinger policy to reduce the mapping errors when projecting from the continuous action space back to the discrete action space. Simulation results demonstrate the superiority of our proposed algorithm over benchmark algorithms with respect to the system sum rate.
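A minimal sketch of the Wolpertinger mapping step referred to above: the DDPG actor emits a continuous proto-action, its k nearest neighbors in the discrete action set are retrieved, and the candidate with the highest critic value is executed. The Euclidean action embedding and the critic interface are assumptions made for illustration.

```python
import numpy as np

def wolpertinger_select(proto_action, action_embeddings, critic_q, state, k=10):
    """Map a continuous proto-action to a discrete action.

    proto_action      : continuous output of the DDPG actor
    action_embeddings : (num_actions, dim) embedding of every discrete
                        time-frequency configuration (an assumed encoding)
    critic_q          : callable (state, action_embedding) -> estimated Q-value
    """
    # 1) k-nearest-neighbour lookup in the discrete action space
    dists = np.linalg.norm(action_embeddings - proto_action, axis=1)
    candidates = np.argsort(dists)[:k]

    # 2) refine with the critic: pick the candidate with the largest Q-value
    q_values = [critic_q(state, action_embeddings[i]) for i in candidates]
    return candidates[int(np.argmax(q_values))]
```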
Single-cell technologies are revolutionizing the entire field of biology. The large volumes of data generated by single-cell technologies are high-dimensional, sparse, and heterogeneous, and have complicated dependency structures, making analyses based on conventional machine learning approaches challenging and impractical. In tackling these challenges, deep learning often demonstrates superior performance compared to traditional machine learning methods. In this work, we give a comprehensive survey on deep learning in single-cell analysis. We first introduce background on single-cell technologies and their development, as well as fundamental concepts of deep learning, including the most popular deep architectures. We present an overview of the single-cell analysis pipeline pursued in research applications, while noting divergences due to data sources or specific applications. We then review seven popular tasks spanning different stages of the single-cell analysis pipeline: multimodal integration, imputation, clustering, spatial domain identification, cell-type deconvolution, cell segmentation, and cell-type annotation. For each task, we describe the most recent developments in classical and deep learning methods and discuss their advantages and disadvantages. Deep learning tools and benchmark datasets are also summarized for each task. Finally, we discuss future directions and the most recent challenges. This survey will serve as a reference for biologists and computer scientists, encouraging collaborations.
The rapid growth and deployment of deep learning (DL) has witnessed emerging privacy and security concerns. To mitigate these issues, secure multi-party computation (MPC) has been discussed as a way to enable privacy-preserving DL computation. In practice, however, MPC protocols often come at very high computational and communication overhead, which can prohibit their popularity in large-scale systems. Two orthogonal research trends have attracted enormous interest in addressing the energy efficiency of secure deep learning: overhead reduction of MPC comparison protocols and hardware acceleration. However, existing approaches either achieve a low reduction ratio, and thus suffer from high latency due to limited computation and communication savings, or are power-hungry, since prior works mainly focus on general computing platforms such as CPUs and GPUs. In this work, as a first attempt, we develop PolyMPCNet, a systematic framework for joint overhead reduction of the MPC comparison protocol and hardware acceleration, by integrating the hardware latency of the cryptographic building blocks into the DNN loss function to achieve high energy efficiency while preserving security guarantees. Our key design principle is to enforce the assumptions exactly in the DNN design, i.e., to train the DNN to be both hardware-efficient and secure while escaping local minima and saddle points and maintaining high accuracy, rather than checking model sensitivity after a DNN is already well trained (by removing or dropping certain non-polynomial operators). More specifically, we propose a straight-through polynomial activation initialization method for cryptographic-hardware-friendly trainable polynomial activation functions, as a replacement for the expensive 2P-ReLU operator. We also develop a cryptographic hardware scheduler and the corresponding performance model for field-programmable gate array (FPGA) platforms.
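As a rough illustration of the polynomial-activation idea (the parameterization, the initial coefficients, and the names below are assumptions, not PolyMPCNet's exact design), a trainable second-order polynomial can stand in for ReLU so that the expensive secure comparison is avoided:

```python
import numpy as np

class PolyActivation:
    """Trainable 2nd-order polynomial a*x^2 + b*x + c replacing ReLU.

    MPC-friendly because it needs only additions and multiplications,
    not the comparison required by ReLU. The starting coefficients are
    illustrative values, not PolyMPCNet's initialization.
    """
    def __init__(self, a=0.09, b=0.5, c=0.47):
        self.a, self.b, self.c = a, b, c     # learnable in a real model

    def __call__(self, x):
        return self.a * x * x + self.b * x + self.c

    def grad(self, x):
        """Derivative w.r.t. the input, used during backpropagation."""
        return 2.0 * self.a * x + self.b

act = PolyActivation()
x = np.linspace(-3, 3, 7)
print(np.round(act(x), 3))   # smooth, comparison-free surrogate of ReLU(x)
```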
Spiking neural networks (SNNs) have attracted much attention for their high energy efficiency and recent advances in classification performance. However, unlike traditional deep learning approaches, the analysis and study of the robustness of SNNs to adversarial examples remain relatively underdeveloped. In this work, we advance the field of adversarial machine learning by experimenting on and analyzing three important SNN security attributes. First, we show that successful white-box adversarial attacks on SNNs depend heavily on the underlying surrogate gradient technique. Second, we analyze the transferability of adversarial examples generated by SNNs and by other state-of-the-art architectures such as Vision Transformers and Big Transfer CNNs. We demonstrate that SNNs are not often fooled by adversarial examples generated by Vision Transformers and certain types of CNNs. Finally, we develop a novel white-box attack that generates adversarial examples capable of fooling both SNN models and non-SNN models simultaneously. Our experiments and analyses are broad and rigorous, covering two datasets (CIFAR-10 and CIFAR-100), five different white-box attacks, and twelve different classifier models.
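The sketch below illustrates the surrogate-gradient mechanism that the first finding hinges on: the spiking nonlinearity is a hard threshold in the forward pass, while the backward pass substitutes a smooth surrogate (a fast-sigmoid derivative here, one common but assumed choice), which is what allows white-box attacks to compute input gradients at all.

```python
import numpy as np

def spike_forward(membrane_potential, threshold=1.0):
    """Forward pass: a hard, non-differentiable firing decision."""
    return (membrane_potential >= threshold).astype(np.float32)

def spike_surrogate_grad(membrane_potential, threshold=1.0, slope=25.0):
    """Backward pass: smooth surrogate for the Heaviside derivative.

    Fast-sigmoid surrogate d/du [u >= thr] ~= 1 / (1 + slope*|u - thr|)^2.
    Different surrogates change the gradients an attacker sees, which is
    why white-box attack success depends on this choice.
    """
    u = membrane_potential - threshold
    return 1.0 / (1.0 + slope * np.abs(u)) ** 2

u = np.array([-0.5, 0.9, 1.0, 1.4])
print(spike_forward(u))          # [0. 0. 1. 1.]
print(spike_surrogate_grad(u))   # peaked around the firing threshold
```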
Multimodal learning, especially large-scale multimodal pre-training, has developed rapidly over the past few years and has driven some of the greatest advances in artificial intelligence (AI). Despite its effectiveness, understanding the underlying mechanisms of multimodal pre-trained models remains a great challenge. Revealing the explainability of such models may enable breakthroughs in novel learning paradigms in AI. To this end, given the multimodal nature of the human brain, we propose to explore the explainability of multimodal learning models with the aid of non-invasive brain imaging technologies such as functional magnetic resonance imaging (fMRI). Specifically, we first present a newly designed multimodal foundation model pre-trained on 15 million image-text pairs, which shows strong multimodal understanding and generalization abilities across various cognitive downstream tasks. Furthermore, from a neural encoding perspective (based on our foundation model), we find that both the visual and language encoders trained in a multimodal fashion are more brain-like than their unimodal counterparts. In particular, we identify a number of brain regions where multimodally trained encoders exhibit better neural encoding performance. This is consistent with the findings of existing studies on multi-sensory integration in the brain. We therefore believe that multimodal foundation models are a more suitable tool for neuroscientists to study the multimodal signal processing mechanisms of the human brain. Our findings also demonstrate the potential of multimodal foundation models as ideal computational simulators to promote research on both the brain and brain-inspired AI.
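Neural encoding in this setting usually means fitting a regularized linear map from model features to voxel responses and scoring it on held-out stimuli; the ridge-regression sketch below captures that comparison procedure in its simplest form (feature extraction and fMRI preprocessing are assumed to happen elsewhere, and the interface is illustrative).

```python
import numpy as np

def fit_encoding_model(features, voxels, alpha=1.0):
    """Ridge regression from stimulus features (n_stimuli, d) to voxel
    responses (n_stimuli, n_voxels); returns the weight matrix."""
    d = features.shape[1]
    gram = features.T @ features + alpha * np.eye(d)
    return np.linalg.solve(gram, features.T @ voxels)

def encoding_score(features, voxels, weights):
    """Per-voxel Pearson correlation between predicted and measured responses."""
    pred = features @ weights
    pred = (pred - pred.mean(0)) / (pred.std(0) + 1e-8)
    meas = (voxels - voxels.mean(0)) / (voxels.std(0) + 1e-8)
    return (pred * meas).mean(0)

# Comparing encoders: a higher held-out correlation for the multimodal encoder's
# features in a brain region is the "more brain-like" evidence referred to above.
```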
Transformers have been considered one of the most important deep learning models since 2018, in part because they have established state-of-the-art (SOTA) records and have the potential to replace existing deep neural networks (DNNs). Despite these remarkable triumphs, the prolonged turnaround time of Transformer models is a widely recognized roadblock. The variety of sequence lengths imposes additional computing overhead, since inputs need to be zero-padded to the maximum sentence length in the batch to accommodate parallel computing platforms. This paper targets field-programmable gate arrays (FPGAs) and proposes a coherent sequence-length-adaptive algorithm-hardware co-design for Transformer acceleration. In particular, we develop a hardware-friendly sparse attention operator and a length-aware hardware resource scheduling algorithm. The proposed sparse attention operator reduces the complexity of attention-based models to linear complexity and alleviates off-chip memory traffic. The proposed length-aware hardware resource scheduling algorithm dynamically allocates hardware resources to fill pipeline slots and eliminates bubbles for NLP tasks. Experiments show that our design incurs very small accuracy loss and achieves 80.2x and 2.6x speedup compared with CPU and GPU implementations, respectively, and is 4x faster than a state-of-the-art GPU accelerator optimized via cuBLAS GEMM.
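The sketch below shows the kind of fixed-window sparse attention that brings the quadratic attention cost down to linear in sequence length; the window size and the NumPy formulation are illustrative assumptions, not the paper's hardware operator.

```python
import numpy as np

def windowed_attention(q, k, v, window=8):
    """Local (banded) attention: each query attends only to keys within
    `window` positions, so cost grows as O(n * window) instead of O(n^2)."""
    n, d = q.shape
    out = np.zeros_like(v)
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        scores = q[i] @ k[lo:hi].T / np.sqrt(d)
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        out[i] = weights @ v[lo:hi]
    return out

q = k = v = np.random.rand(128, 64).astype(np.float32)
print(windowed_attention(q, k, v).shape)   # (128, 64)
```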
IoT devices are increasingly implementing neural network models to enable smart applications. Energy harvesting (EH) technology, which collects energy from the ambient environment, is a promising alternative to batteries for powering these devices, thanks to its lower maintenance cost and the wide availability of energy sources. However, the power provided by an energy harvester is low and inherently unstable, since it varies with the ambient environment. This paper proposes EVE, an automated machine learning (AutoML) co-exploration framework that searches for desired multi-models with shared weights for energy-harvesting IoT devices. These shared models significantly reduce the memory footprint and offer different levels of model sparsity, latency, and accuracy to adapt to environmental changes. An efficient on-device implementation architecture is further developed to execute each model efficiently. A run-time model extraction algorithm is proposed that retrieves an individual model with negligible overhead when a specific model mode is triggered. Experimental results show that the neural network models generated by EVE are on average 2.5x faster than baseline models without pruning and weight sharing.
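One way to picture the shared-weight multi-model idea (the mask format and mode names are assumptions made for illustration): each power mode owns a binary mask over a single superset of weights, and run-time model extraction is just applying the mask of the triggered mode.

```python
import numpy as np

class SharedWeightModels:
    """A single weight tensor shared by several sparsity modes.

    Each mode owns a binary mask; denser masks give higher accuracy but
    cost more energy, so the device picks a mode to match harvested power.
    """
    def __init__(self, shared_weights, mode_masks):
        self.w = shared_weights            # e.g. one convolution kernel
        self.masks = mode_masks            # {"low_power": mask, "high_power": mask}

    def extract(self, mode):
        """Run-time model extraction: negligible overhead, just a masked view."""
        return self.w * self.masks[mode]

w = np.random.rand(16, 16).astype(np.float32)
masks = {
    "low_power": (np.random.rand(16, 16) < 0.3).astype(np.float32),   # ~30% kept
    "high_power": np.ones((16, 16), dtype=np.float32),                # full model
}
models = SharedWeightModels(w, masks)
print(int(masks["low_power"].sum()), "of", w.size, "weights active in low-power mode")
```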